
fix: disable Anthropic thinking to fix tool chain crash#415

Merged
sweetmantech merged 6 commits into test from fix/disable-anthropic-thinking-with-tool-chains
Apr 9, 2026

Conversation

@sidneyswift
Contributor

@sidneyswift sidneyswift commented Apr 8, 2026

Summary

  • Disables Anthropic extended thinking (thinking: { type: "disabled" }) in the ToolLoopAgent constructor to prevent crash when tool chains force toolChoice

Problem

When a user selects an Anthropic model (Claude Opus 4.5, Claude Sonnet 4.5) and triggers a tool chain (create_new_artist, create_release_report), the API crashes with:

Thinking may not be enabled when tool_choice forces tool use

This happens because:

  1. getGeneralAgent.ts enables thinking: { type: "enabled", budgetTokens: 12000 } globally
  2. getPrepareStepResult.ts returns toolChoice: { type: "tool", toolName: "..." } for each chain step
  3. Anthropic's API rejects the combination — only toolChoice: "auto" or "none" are allowed with thinking
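As an illustration of the conflict, Anthropic's rejection rule can be sketched as a small validator. The types below are simplified stand-ins, not the SDK's or Anthropic's actual request types:

```typescript
// Simplified stand-ins for the two request fields involved in the crash.
type Thinking =
  | { type: "enabled"; budgetTokens: number }
  | { type: "disabled" };
type ToolChoice =
  | { type: "auto" }
  | { type: "none" }
  | { type: "tool"; toolName: string };

// Mirrors Anthropic's server-side rule: extended thinking is only compatible
// with tool_choice "auto" or "none", never with a forced tool.
function validateAnthropicRequest(
  thinking: Thinking,
  toolChoice: ToolChoice,
): string | null {
  if (thinking.type === "enabled" && toolChoice.type === "tool") {
    return "Thinking may not be enabled when tool_choice forces tool use";
  }
  return null; // combination is accepted
}
```

With thinking enabled and a forced tool, the sketch returns the same error string seen in production; disabling thinking (or relaxing toolChoice) clears it.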

Why this approach

The Vercel AI SDK's PrepareStepResult type does not include providerOptions, so thinking cannot be disabled per-step. This is a known SDK limitation (open feature request with the exact same use case). The constructor-level disable is the only guaranteed fix.
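A minimal sketch of the constructor-level disable follows; the builder function name and option shape are illustrative assumptions, while the real change lives in the ToolLoopAgent construction in getGeneralAgent.ts:

```typescript
// Illustrative shape of the Anthropic provider options passed at construction.
type AnthropicThinking =
  | { type: "enabled"; budgetTokens: number }
  | { type: "disabled" };

interface ProviderOptions {
  anthropic: { thinking: AnthropicThinking };
}

// Hypothetical helper: builds the agent-wide provider options. Previously this
// would have returned { type: "enabled", budgetTokens: 12000 }; thinking is now
// disabled globally because PrepareStepResult cannot override it per step.
function buildAgentProviderOptions(): ProviderOptions {
  return {
    anthropic: {
      thinking: { type: "disabled" },
    },
  };
}
```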

Trade-off

Anthropic model users lose extended thinking for all conversations (not just tool chain steps). This is acceptable because:

  • Default model is openai/gpt-5-mini — majority of users are unaffected
  • OpenAI's reasoningEffort and Google's thinkingConfig are unaffected
  • Previously, Anthropic thinking was silently ignored anyway (pre-PR fix: use agent.generate() in /api/chat/generate #227), so this restores prior working behavior

Test plan

  • getGeneralAgent.test.ts — 25 tests pass
  • Tool chain tests — 46 tests pass (getPrepareStepResult, toolChains, setupChatRequest)

Made with Cursor


Summary by cubic

Fixes tool chain crashes caused by Anthropic extended thinking with forced toolChoice by assigning non-Anthropic models to every chain step via TOOL_MODEL_MAP. Tool chains now work with Claude 4.5, while extended thinking remains enabled for normal chats.

  • Bug Fixes
    • Added explicit model mappings in TOOL_MODEL_MAP for all chain tools (mostly openai/gpt-5.4-mini; update_account_info uses gemini-2.5-pro).
    • getPrepareStepResult sets the mapped model for each step, avoiding “Thinking may not be enabled when tool_choice forces tool use”.

Written for commit 40a1216. Summary will update on new commits.

Summary by CodeRabbit

  • Bug Fixes
    • Tool chains now always assign a model to the next tool, preventing unset-model cases and improving reliability.
    • Multiple tools now have explicit model mappings for more consistent behavior across actions.
    • The account-update tool continues to use a distinct model mapping where applicable.

Anthropic's API rejects requests that combine extended thinking with
forced tool_choice. The ToolLoopAgent enables thinking globally, and
prepareStep forces toolChoice for every tool chain step (create_new_artist,
create_release_report). This causes "Thinking may not be enabled when
tool_choice forces tool use" on any Anthropic model.

The Vercel AI SDK does not support overriding providerOptions per-step
(open issue vercel/ai#11761), so the fix must be at the constructor level.

Disables Anthropic thinking until the SDK adds per-step providerOptions
support or an alternative approach is implemented.

Made-with: Cursor
@vercel
Contributor

vercel bot commented Apr 8, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project Deployment Actions Updated (UTC)
recoup-api Ready Ready Preview Apr 8, 2026 7:22pm


@coderabbitai

coderabbitai bot commented Apr 8, 2026

📝 Walkthrough

Walkthrough

Expanded TOOL_MODEL_MAP with concrete language-model assignments for many tools; getPrepareStepResult now sets result.model from TOOL_MODEL_MAP[nextToolItem.toolName] when a mapping exists. No exported signatures were changed.

Changes

| Cohort / File(s) | Summary |
|---|---|
| Tool model mappings: lib/chat/toolChains/toolChains.ts | Replaced placeholder/commented guidance with explicit TOOL_MODEL_MAP entries for many tools (most mapped to "openai/gpt-5.4-mini"; update_account_info remains "gemini-2.5-pro"). |
| Prepare-step model assignment: lib/chat/toolChains/getPrepareStepResult.ts | Adjusted logic to conditionally assign result.model for the selected next tool using TOOL_MODEL_MAP[nextToolItem.toolName] when a mapping exists; no public API changes. |

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

Tools once commented, now assigned with care,
Models lined up, each finds its pair.
A subtle tweak, no signatures marred,
Chains hum along, tidy and starred. ✨

🚥 Pre-merge checks | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Solid & Clean Code | ⚠️ Warning | Code violates DRY principle with 'openai/gpt-5.4-mini' hardcoded 15 times in TOOL_MODEL_MAP instead of using a centralized constant. | Extract repeated string into TOOL_CHAIN_DEFAULT_MODEL constant or consolidate into lib/const.ts for single-point model updates. |



@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
lib/agents/generalAgent/getGeneralAgent.ts (1)

24-90: Pre-existing: Function exceeds length guidelines (~66 lines).

This isn't introduced by your PR, but the getGeneralAgent function violates the 50-line guideline for lib/**/*.ts. When time permits, consider extracting logical segments into focused helper functions:

  • fetchArtistContext(artistId) — lines 30-37
  • buildInstructionsWithImages(body, ...) — lines 40-51
  • buildAgentProviderOptions() — lines 67-81

This would improve testability and adhere to SRP. Not blocking for this fix.

As per coding guidelines: "Keep functions under 50 lines" and "Flag functions longer than 20 lines."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/agents/generalAgent/getGeneralAgent.ts` around lines 24 - 90, The
getGeneralAgent function is over the 50-line guideline; extract the logical
blocks into small helpers: implement fetchArtistContext(artistId) to encapsulate
selectAccountInfo and getKnowledgeBaseText and return { artistInstruction,
knowledgeBaseText }, implement buildInstructionsWithImages(body,
baseSystemPrompt) to create baseSystemPrompt via getSystemPrompt, call
extractImageUrlsFromMessages and buildSystemPromptWithImages and return
instructions, and implement buildAgentProviderOptions() to return the
providerOptions object used in new ToolLoopAgent; then update getGeneralAgent to
call these helpers (keep ToolLoopAgent construction but use
buildAgentProviderOptions(), fetchArtistContext(artistId), and
buildInstructionsWithImages(body, baseSystemPrompt)) so the top-level function
falls under the line limit while preserving behavior.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 57c94250-3bdd-413f-9504-493b2b9bd160

📥 Commits

Reviewing files that changed from the base of the PR and between a3eec5d and 7d90e43.

⛔ Files ignored due to path filters (1)
  • lib/agents/generalAgent/__tests__/getGeneralAgent.test.ts is excluded by !**/*.test.*, !**/__tests__/** and included by lib/**
📒 Files selected for processing (1)
  • lib/agents/generalAgent/getGeneralAgent.ts


@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 2 files

Confidence score: 5/5

  • Automated review surfaced no issues in the provided summaries.
  • No files require special attention.

Requires human review: This is a global configuration change to the AI agent's behavior that disables a feature (Anthropic thinking) to resolve a crash, requiring a human review of the trade-off.

Architecture diagram
sequenceDiagram
    participant Client
    participant Agent as General Agent (getGeneralAgent)
    participant SDK as Vercel AI SDK / ToolLoop
    participant Anthropic as Anthropic API

    Note over Client,Anthropic: Request Flow with Anthropic Model (e.g., Claude 3.7)

    Client->>Agent: POST /api/chat (Request Body)
    
    Agent->>Agent: CHANGED: Initialize providerOptions<br/>Set anthropic.thinking = "disabled"
    Note right of Agent: Prevents crash when specific tools are forced

    Agent->>SDK: Start Tool Loop
    
    SDK->>SDK: getPrepareStepResult()
    Note right of SDK: Logic returns toolChoice: { type: 'tool' }

    SDK->>Anthropic: POST /v1/messages
    Note right of Anthropic: Request contains:<br/>1. tool_choice: { type: 'tool', ... }<br/>2. thinking: { type: 'disabled' }

    alt Success Path
        Anthropic-->>SDK: 200 OK (Tool Call/Response)
        SDK-->>Client: Streamed Response
    else Previous Crash Path (Pre-fix)
        Note over SDK,Anthropic: If thinking was 'enabled' and tool_choice was forced
        Anthropic-->>SDK: 400 Bad Request (Invalid combination)
        SDK-->>Client: 500 Error Crash
    end

    Note over Agent,SDK: Note: OpenAI reasoningEffort and Google thinkingConfig<br/>remain unchanged and active.

…ing conflict

Instead of disabling Anthropic thinking globally, tool chain steps now
switch to openai/gpt-5-mini (via TOOL_CHAIN_FALLBACK_MODEL) when no
specific model is mapped in TOOL_MODEL_MAP.

This preserves extended thinking for regular Anthropic conversations
while ensuring forced toolChoice works during tool chain execution.

Made-with: Cursor

@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 5 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="lib/agents/generalAgent/getGeneralAgent.ts">

<violation number="1">
P0: Custom agent: **Flag AI Slop and Fabricated Changes**

This change **enables** Anthropic thinking (`type: "enabled", budgetTokens: 12000`) but the PR title, description, and summary all claim it **disables** thinking. The code does the exact opposite of its stated intent.

Per the PR's own problem statement, enabling thinking is what *causes* the crash (`"Thinking may not be enabled when tool_choice forces tool use"`). The previous code already had thinking disabled — this change reintroduces the crash rather than fixing it.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review, or fix all with cubic.


@cubic-dev-ai cubic-dev-ai bot left a comment


0 issues found across 2 files (changes from recent commits).

Requires human review: The model identifier 'openai/gpt-5.4-mini' appears to be a typo or invalid, which would cause tool chains to crash in production. There is also a mismatch between the PR description and diff.


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
lib/chat/toolChains/getPrepareStepResult.ts (1)

60-60: Align the PrepareStepResult type with Line 60 behavior.

Line 60 always sets result.model, but the shared type still marks model as optional. Tightening that contract will improve downstream type safety and intent clarity.

♻️ Proposed contract alignment
--- a/lib/chat/toolChains/toolChains.ts
+++ b/lib/chat/toolChains/toolChains.ts
@@
 export type PrepareStepResult = {
   toolChoice?: { type: "tool"; toolName: string };
-  model?: LanguageModel;
+  model: LanguageModel;
   system?: string;
   messages?: ModelMessage[];
 };
As per coding guidelines `lib/**/*.ts`: Use TypeScript for type safety.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/chat/toolChains/getPrepareStepResult.ts` at line 60, The
PrepareStepResult type currently marks "model" as optional but
getPrepareStepResult.ts (in the getPrepareStepResult function) always assigns
result.model (line with TOOL_MODEL_MAP[nextToolItem.toolName] ||
TOOL_CHAIN_FALLBACK_MODEL), so update the shared PrepareStepResult type to make
"model" required (non-optional), then fix any callsites/types that assumed it
could be undefined (e.g., places referencing PrepareStepResult, the
getPrepareStepResult return type, and any tests) to reflect the tightened
contract; ensure exports/imports remain consistent and run type checks to catch
and correct any mismatches.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 0b5bdb6e-f0c7-45e5-8245-4fd662a4c800

📥 Commits

Reviewing files that changed from the base of the PR and between 7d90e43 and 6b8d284.

⛔ Files ignored due to path filters (1)
  • lib/chat/toolChains/__tests__/getPrepareStepResult.test.ts is excluded by !**/*.test.*, !**/__tests__/** and included by lib/**
📒 Files selected for processing (2)
  • lib/chat/toolChains/getPrepareStepResult.ts
  • lib/chat/toolChains/toolChains.ts

@sidneyswift
Contributor Author

Testing Instructions

The preview deployment is live at:
https://recoup-api-git-fix-disable-anthropic-cb22a1-recoupable-ad724970.vercel.app

How to test

  1. Open https://chat.recoupable.com
  2. Open DevTools console (Cmd+Option+J) and run:
    sessionStorage.setItem('recoup_api_override', 'https://recoup-api-git-fix-disable-anthropic-cb22a1-recoupable-ad724970.vercel.app');
  3. Refresh the page
  4. Select Claude Opus 4.5 or Claude Sonnet 4.5 from the model picker
  5. Type "create a new artist" and provide an artist name when prompted

What to verify

  • The flow does NOT crash with "Thinking may not be enabled when tool_choice forces tool use"
  • Tool chain steps execute successfully (Spotify search, account update, deep research, etc.)
  • After the chain completes, regular conversation continues on the Claude model with thinking enabled
  • Also test with "create a release report" for the second tool chain

How the fix works

Tool chain steps now switch to openai/gpt-5.4-mini (via TOOL_CHAIN_FALLBACK_MODEL) instead of inheriting the user's model. This avoids the Anthropic thinking + forced toolChoice conflict while keeping thinking enabled for regular conversations.
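The step-level switch can be sketched as a plain lookup. TOOL_MODEL_MAP and TOOL_CHAIN_FALLBACK_MODEL are the PR's names; the function is a simplified stand-in for the assignment in getPrepareStepResult.ts:

```typescript
type LanguageModel = string; // stand-in for the SDK's model identifier type

// Default for tool chain steps; forced toolChoice is incompatible with
// Anthropic extended thinking, so chain steps never inherit the user's model.
const TOOL_CHAIN_FALLBACK_MODEL: LanguageModel = "openai/gpt-5.4-mini";

// Per-tool overrides take precedence over the fallback.
const TOOL_MODEL_MAP: Record<string, LanguageModel> = {
  update_account_info: "gemini-2.5-pro",
};

// Simplified stand-in for getPrepareStepResult's model assignment.
function modelForStep(toolName: string): LanguageModel {
  return TOOL_MODEL_MAP[toolName] ?? TOOL_CHAIN_FALLBACK_MODEL;
}
```

Tools with an explicit mapping (such as update_account_info) keep their model; everything else resolves to the fallback.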

Reset after testing

sessionStorage.removeItem('recoup_api_override');

@sidneyswift
Contributor Author

Test Results ✅

Tested against the preview deployment via curl with model: "anthropic/claude-sonnet-4.5" and prompt "Create a new artist named Xavier Pages".

What happened

  1. Step 1 — Claude Sonnet 4.5 with extended thinking (reasoning-start/reasoning-delta) reasoned about the request and called create_new_artist. Thinking worked correctly — no forced toolChoice at this stage.

  2. Step 2 — Tool chain activated. Model switched to openai/gpt-5.4-mini (confirmed via providerMetadata.openai in the stream). get_spotify_search was called with forced toolChoice: { type: "tool" }. No crash.

Before the fix

This would have crashed with:

Thinking may not be enabled when tool_choice forces tool use

After the fix

  • ✅ Anthropic thinking works for regular conversation steps
  • ✅ Tool chain steps switch to openai/gpt-5.4-mini and execute with forced toolChoice
  • ✅ No "Thinking may not be enabled" error
  • ✅ Artist creation flow proceeds through the chain

Test command used

curl -s -N -X POST \
  -H "Content-Type: application/json" \
  -H "x-api-key: $RECOUP_TEST_API_KEY" \
  "https://recoup-api-git-fix-disable-anthropic-cb22a1-recoupable-ad724970.vercel.app/api/chat" \
  -d '{"prompt":"Create a new artist named Xavier Pages","model":"anthropic/claude-sonnet-4.5"}'


@cubic-dev-ai cubic-dev-ai bot left a comment


0 issues found across 1 file (changes from recent commits).

Requires human review: Inconsistency between PR description (referencing constructor changes) and actual diff (implementing fallback model). Modifies AI model selection logic for tool chains.

if (model) {
  result.model = model;
}
result.model = TOOL_MODEL_MAP[nextToolItem.toolName] || TOOL_CHAIN_FALLBACK_MODEL;
Contributor


KISS - Why do we need a fallback?

  • Why not use the existing toolName logic?


// Forced toolChoice is incompatible with Anthropic extended thinking.
// Tool chain steps use this model unless overridden by TOOL_MODEL_MAP.
export const TOOL_CHAIN_FALLBACK_MODEL: LanguageModel = "openai/gpt-5.4-mini";
Contributor


KISS - remove the fallback and use the existing TOOL_MODEL_MAP.

…l chain tools

Addresses code review feedback — use existing TOOL_MODEL_MAP pattern
instead of a separate fallback constant. All tool chain tools now have
explicit model entries in TOOL_MODEL_MAP.

Made-with: Cursor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
lib/chat/toolChains/toolChains.ts (1)

18-37: Consider extracting the repeated model string to a constant for DRY compliance.

The string "openai/gpt-5.4-mini" is repeated 15 times across the mapping. Extracting it to a constant would reduce duplication and make future model changes a single-line edit.

That said, I recognize explicit per-tool mappings provide clarity and allow granular overrides—so this is a recommended improvement rather than a blocker.

♻️ Proposed refactor to extract constant
+const DEFAULT_TOOL_CHAIN_MODEL = "openai/gpt-5.4-mini" as LanguageModel;
+
 // Forced toolChoice is incompatible with Anthropic extended thinking.
 // Every tool used in a chain must have a model here to avoid the conflict.
 export const TOOL_MODEL_MAP: Record<string, LanguageModel> = {
   update_account_info: "gemini-2.5-pro",
-  get_spotify_search: "openai/gpt-5.4-mini",
-  update_artist_socials: "openai/gpt-5.4-mini",
-  artist_deep_research: "openai/gpt-5.4-mini",
-  spotify_deep_research: "openai/gpt-5.4-mini",
-  get_artist_socials: "openai/gpt-5.4-mini",
-  get_spotify_artist_top_tracks: "openai/gpt-5.4-mini",
-  get_spotify_artist_albums: "openai/gpt-5.4-mini",
-  get_spotify_album: "openai/gpt-5.4-mini",
-  search_web: "openai/gpt-5.4-mini",
-  generate_txt_file: "openai/gpt-5.4-mini",
-  create_segments: "openai/gpt-5.4-mini",
-  youtube_login: "openai/gpt-5.4-mini",
-  web_deep_research: "openai/gpt-5.4-mini",
-  create_knowledge_base: "openai/gpt-5.4-mini",
-  send_email: "openai/gpt-5.4-mini",
+  get_spotify_search: DEFAULT_TOOL_CHAIN_MODEL,
+  update_artist_socials: DEFAULT_TOOL_CHAIN_MODEL,
+  artist_deep_research: DEFAULT_TOOL_CHAIN_MODEL,
+  spotify_deep_research: DEFAULT_TOOL_CHAIN_MODEL,
+  get_artist_socials: DEFAULT_TOOL_CHAIN_MODEL,
+  get_spotify_artist_top_tracks: DEFAULT_TOOL_CHAIN_MODEL,
+  get_spotify_artist_albums: DEFAULT_TOOL_CHAIN_MODEL,
+  get_spotify_album: DEFAULT_TOOL_CHAIN_MODEL,
+  search_web: DEFAULT_TOOL_CHAIN_MODEL,
+  generate_txt_file: DEFAULT_TOOL_CHAIN_MODEL,
+  create_segments: DEFAULT_TOOL_CHAIN_MODEL,
+  youtube_login: DEFAULT_TOOL_CHAIN_MODEL,
+  web_deep_research: DEFAULT_TOOL_CHAIN_MODEL,
+  create_knowledge_base: DEFAULT_TOOL_CHAIN_MODEL,
+  send_email: DEFAULT_TOOL_CHAIN_MODEL,
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/chat/toolChains/toolChains.ts` around lines 18 - 37, TOOL_MODEL_MAP
repeats the literal "openai/gpt-5.4-mini" many times; define a single constant
(e.g., DEFAULT_TOOL_MODEL = "openai/gpt-5.4-mini") and replace the repeated
string values in TOOL_MODEL_MAP with that constant while leaving unique entries
like update_account_info: "gemini-2.5-pro" untouched so future model changes
only require editing the constant.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: bdea683b-d278-46f1-8be3-41aaf1bf0479

📥 Commits

Reviewing files that changed from the base of the PR and between 1df0052 and a2e96d2.

📒 Files selected for processing (2)
  • lib/chat/toolChains/getPrepareStepResult.ts
  • lib/chat/toolChains/toolChains.ts
💤 Files with no reviewable changes (1)
  • lib/chat/toolChains/getPrepareStepResult.ts


@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 3 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="lib/chat/toolChains/toolChains.ts">

<violation number="1" location="lib/chat/toolChains/toolChains.ts:22">
P1: Custom agent: **Flag AI Slop and Fabricated Changes**

15 map entries hardcoded to the same `"openai/gpt-5.4-mini"` value replace the removed `TOOL_CHAIN_FALLBACK_MODEL` default. This is duplicated configuration that's fragile: any new tool added to a chain without a map entry here silently regresses to the original crash (the consumer in `getPrepareStepResult.ts` only sets the model `if (model)` — there is no fallback).

Restore a default/fallback constant and keep `TOOL_MODEL_MAP` only for overrides (like `update_account_info → gemini-2.5-pro`).</violation>
</file>


update_account_info: "gemini-2.5-pro",
// Add other tools that need specific models here
// e.g., create_segments: "gpt-4-turbo",
get_spotify_search: "openai/gpt-5.4-mini",

@cubic-dev-ai cubic-dev-ai bot Apr 8, 2026


P1: Custom agent: Flag AI Slop and Fabricated Changes

15 map entries hardcoded to the same "openai/gpt-5.4-mini" value replace the removed TOOL_CHAIN_FALLBACK_MODEL default. This is duplicated configuration that's fragile: any new tool added to a chain without a map entry here silently regresses to the original crash (the consumer in getPrepareStepResult.ts only sets the model if (model) — there is no fallback).

Restore a default/fallback constant and keep TOOL_MODEL_MAP only for overrides (like update_account_info → gemini-2.5-pro).


@sidneyswift
Contributor Author

Test Results — After Code Review Refactor ✅

Tested latest commit (a2e96d2) against preview deployment with model: "anthropic/claude-sonnet-4.5".

Tool chain execution trace

| Step | Model (from providerMetadata) | Tool | Status |
|---|---|---|---|
| 1 | anthropic (thinking enabled) | create_new_artist | |
| 2 | openai | get_spotify_search | |
| 3 | google (gemini via TOOL_MODEL_MAP) | update_account_info | |
| 4 | openai | update_artist_socials | |
| 5 | openai | artist_deep_research | |
| 6 | openai | spotify_deep_research | |
| 7 | openai | get_artist_socials | |
| 8 | openai | get_spotify_artist_top_tracks | |
| 9 | openai | get_spotify_artist_albums | |
| 10 | openai | get_spotify_album | |
All 10 steps executed without error. Anthropic thinking works on step 1, then TOOL_MODEL_MAP routes each chain tool to its mapped model. No "Thinking may not be enabled" crash.

@sweetmantech
Copy link
Copy Markdown
Contributor

Verification Test Results ✅

Tested both prod (current) and preview (this PR) with the same request:

curl -s -N -X POST \
  -H "Content-Type: application/json" \
  -H "x-api-key: $API_KEY" \
  "$ENDPOINT/api/chat" \
  -d '{"prompt":"Create a new artist named Test Artist","model":"anthropic/claude-sonnet-4.5"}'

Prod (before fix) ❌

  • Step 1: Claude Sonnet 4.5 with extended thinking — create_new_artist called successfully
  • Step 2: Tool chain activates with forced toolChoice and crashes:

    "Thinking may not be enabled when tool_choice forces tool use."

Preview (with fix) ✅

  • Step 1: Claude Sonnet 4.5 with extended thinking — create_new_artist called successfully
  • Step 2: Tool chain switches to openai/gpt-5.4-mini; get_spotify_search called, full chain completed
  • Step 3+: Remaining chain steps completed (knowledge base report generated)
  • Final response delivered with no errors

Bug confirmed on prod, fix confirmed on preview.

🤖 Generated with Claude Code

@sweetmantech sweetmantech merged commit 24e660a into test Apr 9, 2026
5 of 6 checks passed
@sweetmantech sweetmantech deleted the fix/disable-anthropic-thinking-with-tool-chains branch April 9, 2026 00:51

3 participants